AI Policy: Global Stakes, Local Governance
Margaret Woolley Busse, executive director of the Utah Department of Commerce and cofounder of the state’s Office of Artificial Intelligence Policy, discusses the regulatory environment for artificial intelligence in the United States, with particular focus on its implications at the state and local level. Adam Segal, Ira A. Lipman chair in emerging technologies and national security and director of the Digital and Cyberspace Policy Program at CFR, assesses the global race in artificial intelligence and why these dynamics matter for U.S. national security and the strategic competition with China.
TRANSCRIPT
FASKIANOS: Thank you. Welcome to the Council on Foreign Relations State and Local Officials Webinar Series. I’m Irina Faskianos, vice president of the National Program and Outreach here at CFR.
CFR is an independent, nonpartisan national membership organization, think tank, educator, and publisher focused on U.S. foreign policy. CFR generates policy-relevant ideas and analysis, convenes experts and policymakers, and is also the publisher of Foreign Affairs magazine. As always, CFR takes no institutional positions on matters of policy.
Through our State and Local Officials Initiative, CFR serves as a resource on international issues affecting the priorities and agendas of state and local governments by providing background and analysis on a wide range of policy topics. We appreciate your taking time to be with us for today’s discussion. We are delighted to have more than 600 participants from fifty-two U.S. states and territories.
And I want to again remind everybody that the webinar is on the record. The video and transcript will be posted on our website after the fact at CFR.org, and we will circulate it as well.
We are pleased to have Adam Segal and Margaret Woolley Busse speak on artificial intelligence and its impact at both the local and international level. We’ve shared their bios. I will give you a few highlights.
Adam Segal is the Ira A. Lipman chair in emerging technologies and national security, and director of the Digital and Cyberspace Policy Program, at the Council on Foreign Relations. He was also the project director for three CFR-sponsored independent task forces on cyberspace, on securing a resilient internet, and on Chinese military power. Those are all available on our website. And he was a senior advisor in the State Department’s Bureau of Cyberspace and Digital Policy. He is also the author of The Hacked World Order: How Nations Fight, Trade, Maneuver, and Manipulate in the Digital Age, and he contributes to the CFR blog Net Politics.
Margaret Woolley Busse is the executive director of Utah’s Department of Commerce and a member of Governor Spencer Cox’s Cabinet. She created the state’s Office of Artificial Intelligence Policy, tasked with providing regulatory relief to qualifying AI-focused companies and making regulatory policy recommendations through its policy learning lab. She is also a commissioner for the Utah Economic Opportunity Commission and a member of the Talent Ready Utah Board.
So thank you both for being with us today.
Adam, I thought we would begin with you to give us a high-level overview of how national governments are leveraging AI to advance economic, military, and diplomatic goals, and who is leading the strategic competition for AI globally.
SEGAL: Sure. Thanks, Irina.
So I’m going to, I think, make some broad generalizations about what we’re seeing in China and the United States, the two main competitors in this space. And we are very clearly seeing the emergence of two separate tech stacks—so how both countries build the technologies that are going to power artificial intelligence, but also different types of markets and how the markets respond to and invest in AI opportunities.
We see the Chinese focusing a lot on indigenous innovation and self-reliance. Last week the Communist Party met, and they leaked some of the ideas that are going to show up in the next Five-Year Plan, which will guide China from 2026 to 2030. There’s a lot of focus on self-reliance, and on using AI in a range of industries and in the military.
But one of the big differences is that the debate in the U.S. is very much focused on AGI—artificial general intelligence—and the race for superintelligence. The Chinese seem much less interested in that, and are much more focused on actual applications—industrial applications and how they’re going to diffuse AI quickly into their economy. So while U.S. VCs—venture capitalists—invest heavily in large language models, generative AI, and enterprise artificial intelligence, Chinese VCs are investing in robotics and AI manufacturing. Forty percent of Chinese VC investment is in manufacturing and the applications of AI; about 3 percent of U.S. VC investment is. We’re beginning to see some change in that. You saw very prominent proponents of AI like Eric Schmidt writing op-eds saying we’ve been overly focused on artificial intelligence itself; we need to think more about how we adapt it and diffuse it, because diffusion is really the key to winning this battle.
And the numbers—when you look across both societies—are pretty similar. About 85 percent of professionals in China say they have used generative AI in their daily work; about 65 percent of American professionals—you know, not that far off. Fifty percent of large firms in China are using AI in some way, shape, or form; about a third in the U.S., according to this one survey. So the U.S. is trailing—there’s a little bit of a gap there with China—but all along the technology stack the U.S. still has significant leads: chips, language models, compute. And energy may be the one place where China has the really clear advantage.
I’ll stop with just one final thought about our role in the world. The president and the vice president in particular have talked a lot about AI dominance, and about having other countries use American AI and American chips in everything. My concern is that we may be recreating the 5G story—the fifth-generation telecommunications story—where China actually was the supplier for most of the developing world, because the Chinese product was cheaper, about a third cheaper than Nokia and Ericsson and Samsung, and Chinese firms were on the ground. And so Chinese AI models, while they may not be as good as the U.S. models, are open-weight, they’re more adaptable, and they’ve been cheaper so far. And Bloomberg last week had a story about how OpenAI and Google are both being outcompeted in Africa by DeepSeek, which is the Chinese company that kind of surprised us all with its big breakthrough. So that’s one thing I think we should think about as we push forward with diffusion and adaptation.
And I’ll stop there.
FASKIANOS: Thanks, Adam.
Margaret, let’s go to you to talk about: As the competition over AI intensifies, what role do you see for state and local governments in shaping broader U.S. policy? And what tools can they use—you know, if you can share what you’re doing in Utah, and, for other officials across the country, what they can do to balance innovation, workforce development, and governance.
BUSSE: Yeah. Thank you, Irina. And I loved just hearing from Adam. It’s great to be able to sort of zoom out. We’re going to zoom in a little bit on what states are doing and what Utah is doing in particular.
Irina mentioned that we created really the first Office of AI Policy, if you will, in terms of the functions that it has. And we created that really out of a notion of wanting to do three things in the age of AI with emerging risks and emerging opportunities.
We wanted to find ways to protect the public. We were just kind of coming off of all of the information that we started to understand as a society on the risks of social media, and then we had AI. And so I think there was a lot of concern about how some of these LLM chatbot models could be exploitative in the same way that social media has been. And so we wanted to have that as one goal, protect the public.
We wanted to also encourage innovation. There are huge opportunities everywhere. There’s almost a land grab going on right now to try to apply AI in many different sectors, which is so exciting.
And then, thirdly, we wanted to make sure that we could as a government observe and learn what was happening so that we could make good policy recommendations as we go, so we could be agile.
So we set up our Office of AI Policy. The legislation that created the office passed in early 2024. We set up the office July 1 of 2024. And the office really has two tasks or two functions.
One is that it runs what we call a learning lab: we take on different issues that have to do with regulation and make recommendations to policymakers—to the legislature here in Utah. We first took on mental health therapy bots, because we saw a lot of risks but also a lot of need for those bots; we took that on and got legislation passed that provided a lot of protection but also created a pathway to innovation for those engaging with mental health chatbots. And right now we’re looking at AI companions and we’re looking at deepfakes. So we pick certain areas where we either need to regulate given this new technology, or maybe need to deregulate or re-regulate given the new technology, and we can make those recommendations to the legislature after a robust process of research and stakeholder input.
The second function, which is really exciting, is what we call our regulatory sandbox. It offers regulatory relief or regulatory mitigation to AI companies, as Irina alluded to earlier. And what this does is enable us to encourage innovation across the state where companies are bumping up against regulatory barriers. And so this has been really exciting. We’ve been able to help many companies with regulatory relief so far, and we have a lot more in the queue. That allows them to innovate and get going with their companies, their models, and their experimentation. And it also allows us to learn. Maybe we need to re-regulate. So we can use those learnings, again, in our learning lab to then be able to make recommendations to the legislature.
So that’s really exciting. We’ve been sort of recognized nationally and internationally for this approach, this kind of a balanced approach, and we’ve learned a lot in the process.
You asked, Irina, a little bit about the relationship of innovation, workforce development, and governance, all of those things together. And what’s interesting is that I think workforce development and governance of AI relate directly to innovation. Having the right workforce in place enables innovation; it enables companies to take advantage of workers who can do the work. And having the right governance structure is incredibly important, too.
One of the things that was almost surprising to us as we were formulating our ideas—which ultimately culminated in the Office of AI Policy—was that our companies here in Utah, our AI companies, were saying: We need to have the public trust us. And oftentimes having a good regulatory regime helps to enable that trust by putting guardrails into place, so everyone’s on the same playing field and the public can feel confident. And again, coming off of the scandals and the revelations about the level of exploitation and harm from social media, I think AI walked into that with a low level of trust, and there is a lot of anxiety. And I think it will hamper AI companies going forward, and potentially our whole industry here in the U.S. as we’re trying to be competitive with China and other countries, if we don’t get this right.
And so I see the states’ role here as really being able to provide a template for how the U.S. could approach AI regulation, while not trying to take the place of what the federal government should do. In our learning lab regulatory recommendations, we’re focusing on narrower issues and not trying to regulate the full development of an LLM, for instance; instead, we focus more on the application or the deployment where we think there could be harm or, again, where we need to clear out regulatory barriers. And so we’re hoping that the federal government can take a page out of our playbook, look at what we’re doing, see what works and what doesn’t, and also look at the regulatory mitigation side.
So we’ve actually had discussions with our federal partners about creating federal-level AI regulatory sandboxes that would explicitly coordinate with state-level sandboxes like ours. And we’ve had several states reach out to us because they want to create something similar. Wouldn’t it be amazing if you’re a company and you can go to, you know, Utah’s sandbox and get regulatory relief from the state but also from the feds? That adds a ton of value. In this case the whole is much, much greater than the sum of the parts.
So of the ideas coming out of the different states, I think some are good and some are not as good, but they really can serve as a way to inform our federal policymakers on a good approach. And I hope that they come to one eventually because, you know, unfortunately, we’ve had a Congress that has typically been afflicted with a fair amount of sclerosis and is not moving on many different issues. So we think this is a place where the states should play, and experiment, and get good laws on the books. And we hope the feds will learn from them.
FASKIANOS: Fantastic. Thank you both. This is a great start to the conversation.
And we want to turn to all of you now for questions and comments—share best practices or what you’re doing in your state.
(Gives queuing instructions.)
So I’m going to go first to—let’s see; I’m looking, have to get to the right things—I’m going to Burt Fecker (sp), if you could unmute and identify yourself.
Q: Can everyone hear me?
FASKIANOS: We can.
Q: Fantastic. Ms. Faskianos, as always, scintillating conversation and discussion. Mr. Segal, Ms. Busse, thank you for a really informed topic.
Just to preface a little bit, I am the subject matter expert for commissioning liquid-to-chip-cooled hyperscale datacenters for my company, and I commissioned the first-of-its-kind-in-the-world liquid-to-chip-cooled hyperscale datacenter. And given the unique position that I have—working in industry, and counterpointing that as a policymaker and an elected official—it’s a topic of enormous importance to me.
Over the last two days, I actually had the privilege of meeting with the embassies of Australia, the U.K., and India; met with representatives from the embassy of Indonesia, as well as Canada; and then had meetings in Senator Cruz’s office as well as with a couple of congressmen. And this was a topic I was heavily bringing up, because my push is to see what we could do for soft diplomacy from a city-to-city perspective, which can survive the shockwaves, if you will, of foreign policy regardless of administration.
The last time Ms. Faskianos invited me to speak—you know, Bretton Woods is something that lives heavily in my mind, and I proposed that maybe what a 2.0 version of it looks like is a shift from bullion to electrons. And the one thing that I can say for certain is that right now there is a shift in the Pan-Pacific. That is evidenced by AUKUS augmenting the Five Eyes, and we’re going to have an enormous amount of communication that goes back and forth that depends heavily on not just AI supremacy in the form of the military, but also a deep level of quantum computing and cryptography.
In order to get that to happen, I think one of the things that we all, respectively, have to work on—and I appreciate what you’re doing in Utah to create templates; I’m trying to figure out what we could do here in Texas as well. And one of the things that I also proposed to Senator Cruz’s office was a permitting perspective. We’re going to need power, and that power either has to come in the form of small modular reactors or from augmenting existing power systems—so when it comes to natural gas, having hydrogen fuel-cell power plants, things that will also add value in terms of water creation.
The other issue, at least in Texas and some other areas, is education—the education and dissemination from within the datacenter industry to local municipalities and state governments about the actual effects of datacenters, and the need for collaboration when it comes to, you know, design, whether it’s closed-loop systems, or whether they’re using water or propo-alco-glycol (ph). So in that aspect, my question to you, Ms. Busse, and to Mr. Segal: What do you think we could do to figure out city-to-city, or in a sense state-to-state, exchange not only of ideas and culture but also technology, so this industry continues to grow and we maintain that American influence and diplomacy through the use of artificial intelligence, and the moral compass, if you will, of the American style of doing business, while maintaining those supply chain logistics with countries that want to come and partner with us?
BUSSE: Well, this is the—(laughs)—this is, like, a massive question, Burt (sp), and I think it’s really the crux of the issue across many different dimensions, frankly. So thank you for the question.
I don’t have a ton of expertise on the datacenter side and the energy side, although I do have some. Governor Cox, whom I work for, has announced what we call Operation Gigawatt, with a goal to double the amount of energy production we have in Utah within ten years. And small nuclear reactors are certainly part of the equation. We’re looking at all of the above, if you will, on that front. I’m involved in the datacenter discussion a bit as well, because we actually regulate public utilities within our department, and so we’re familiar with those things.
I’ll just share a few thoughts. One is on the datacenters piece. I have no problem with datacenters coming in, and I think that in America, and certainly here in Utah, we can build the energy that we need to power them. I really believe that. But we have to make sure that we’re doing it in a way that is not causing ratepayers to pay more money for their energy, and is not causing disruption in communities or problems with the airshed, if you will, in different communities. We just have to do the analysis, and we have to decide who is paying, and how, and when. And I actually don’t think it’s as much of a challenge as some people have made it out to be. Frankly, the companies that need this power for the datacenters are truly the richest, wealthiest companies we’ve ever had in the history of the world, and so we just need to figure out a way to make sure that they are paying their fair share as we get these datacenters up and going. And there are a lot of calculations you have to do to figure that out, but I think that’s the important thing to do.
And we’re in the middle right now in Utah of figuring that out—what is the algorithm, if you will, for where to put stuff, and how do we understand the tradeoffs we’re making in different communities, those kinds of things. Some municipalities are incredibly excited for these because of the tax base they can contribute. So I think there is some opportunity there, but it has to be thought through very carefully. Because, again, just as some of the risks coming from AI that I talked about previously—exploitation and manipulation, those kinds of things—can give AI a black eye when it doesn’t need one, the datacenter conversation and controversy can give AI a black eye, and it doesn’t need to. If people feel like their energy costs are going way, way up because of AI and datacenters, they’re not going to be feeling good about it, and they’re not going to care if we win against China. So I think we can go about this in a way that doesn’t make people’s lives worse—(laughs)—and makes them better, and then we can really feel good and confident and want to be winning this race.
I also think you said something interesting about, you know, beating China and doing it in the right way. I think that’s a really important principle here, because sometimes I hear the argument that we can’t regulate AI at all, and we have to be willing to, you know, have ratepayers pay way more for energy and all this stuff, all so we can beat China. I just don’t think that’s true. I think that we don’t need to sacrifice our kids’ safety on Meta’s AI companion app, right—(laughs)—where they’re explicitly saying it’s fine for eight-year-olds to be able to have sexual conversations. We don’t need to do that to beat China. Those things are not the same thing. And frankly, we know that China doesn’t even allow a TikTok product in their country because of the damage to their kids. (Laughs.) They’ve deployed it here, but not in their own country. So we need to be clear-eyed as we think through what is the right way to be deploying and developing AI, because doing it in the right way is, I think, what actually is going to allow us to win the race against China. We don’t want to be bad actors; we want to be the good actors in this space.
So I’ll stop there.
FASKIANOS: Adam, do you want to comment on the competition with China?
SEGAL: Yeah—
FASKIANOS: Yeah.
SEGAL: Well, I think you put your finger on something really important, which is what’s sometimes called subnational diplomacy. But you know, there is a lot of churn right now and questions about who wants to work with us, and what the tariffs’ impact might be on alliances and those types of cooperation.
But I think, you know, you mentioned the Quad, which has a section that deals with tying ecosystems together among the four partners—India, Japan, the United States, and Australia. That is really a private-sector-led part of the Quad. And U.S. and Indian VCs—even as that relationship has become more difficult in the last couple of months—have spoken about their desire to maintain cooperation around AI and other emerging technologies. So I think there’s a role for local governments to play in talking to their local VCs, figuring out who they’re partnering with, and making sure that those contacts keep going. And I think there’s really a big demand for it that you would find, especially in the Indo-Pacific, as you said.
FASKIANOS: Thank you.
I’m going to take a written question from Senator Melissa Wiklund of Minnesota for Ms. Busse—and I think, Adam, you can also weigh in on this: Has Utah been able to engage or work with the top tech companies that are large and national/international, like Google, Meta? Do you have a structure or process for companies to engage with your offices, and is there an impact if a company doesn’t follow the process? And, Adam, I think you can maybe weigh in from your experience, having been in the—at the federal level dealing with these companies. So, Margaret, over to you.
BUSSE: Yeah. Great question, Senator. And I know Minnesota’s done some really great things in this space.
We do engage with these companies. You know, all of these companies—actually, not all of them, but most of them—have in-house lobbyists here in Utah. And so we do engage with them. And I can say that we have worked with them, and we have often tweaked some of the language in our different bills because we believe the feedback they have brought has actually been beneficial.
I think sometimes there’s a little bit of a tendency amongst these companies to engage a little bit but not as much as they might, because they sort of feel like they have the backstop of being able to sue us. And we have been sued, actually. We were the first state to pass legislation to regulate social media for minors, and we got sued for that. And I think a lot of these issues—around whether algorithms and other things that we would consider product features are, as they would argue, a function of free speech—need to be sorted out at the Supreme Court. So I think there’s a little bit of a waiting game on that for a lot of these tech policy issues.
But, yes—our whole policymaking process is one of engagement, and so we do that. The question is, how sincerely are they engaging with us? I think some are more than others.
SEGAL: Yeah. I mean, look, we saw the same thing at the international level, right—the big tech companies engage with some nations and don’t engage with others. It depends on how much leverage they think you have, how badly they want access to a market, how capable the local government they’re dealing with is. So I think having a consistent message and capability at the local level is something that gets their attention. And it sounds like Utah, being at the cutting edge, has things to offer them.
But, look, we know that the firms aren’t crazy about local regulation. And since they’re afraid nothing’s ever going to happen at the national level, many of them were behind the efforts to pass the proposed ten-year moratorium on state and local AI legislation, and I think they’ll continue to do that. And they will mobilize the argument that Margaret mentioned before, which is that, oh, you know, if we regulate, we’ll lose to China. I’m totally on the same side as she is on that; I think that’s the wrong argument and, in fact, with the right type of regulation, we’ll both innovate faster and be trustworthy to international partners.
FASKIANOS: Thank you.
I’m going to take the next question, a raised hand from Pennsylvania Representative Ben Waxman.
Q: Hi. Thank you so much for this conversation. Thank you to all of you for doing this.
You know, my question really is, what is the impact on workers? We see more and more of this—there was just a big announcement from Amazon recently about large numbers of workers expected to be displaced as a result of AI and related implementation of technology. I have circulated a co-sponsorship memo—I’m curious to get your thoughts—about the concept that companies that eliminate large numbers of positions should be required to pay some kind of windfall surcharge or tax based on the number of jobs eliminated, not only to provide support to the workers who are going to be laid off or lose their jobs but also to retrain them. So I’m just curious about the reaction to that, or what other people are thinking in terms of the impact on the workforce—in particular on people who are going to be out of work and facing a lot of challenges in terms of retraining to get the skills they need to get back into the workforce, especially if they spent thirty years professionalizing in a particular trade and put a lot of money into education or training. That’s not something that can be recaptured overnight with a couple of online classes.
BUSSE: Adam, do you want to go first?
SEGAL: I’ll go first because I have no experience or actual practical involvement. (Laughter.) And then you can follow up and correct me.
I think, you know, look, a number of studies have come out over the last six months that have identified a number of sectors that seem to be very vulnerable, a lot of them white collar—lawyers, coders, and other areas. And we’re beginning, as you said, I think, to see some of that impact.
I also think we’re very early on, and we may find that the companies discover themselves that they’ve gone too far. I was at a cybersecurity conference last week, and everyone was saying: We’re not hiring any new coders because we realize we don’t need them anymore; but then we also realize we’re going to need people five years down the line and we’re not going to have them, because we haven’t hired anybody. And the, you know, vibe coding with AI is actually not providing us what we need, so we’ve started hiring some people back.
I think the larger issue you raise is about how you deal with massive disruption in the workforce. You know, I would be supportive of those ideas. Unfortunately, we don’t have a great track record of it. With the first and second waves of globalization, we had worker training and other programs that were supposed to be in place, supposed to be paid for by taxes and other things, and then they were all gutted. So I don’t know if we would have the political support for them, although I think those types of things are going to be important—although what that training is going to look like, I think, is also a big question right now.
BUSSE: So, yeah, this is just a fascinating time for our country. It is going to be incredibly disruptive. I think everyone can sort of see that, but nobody exactly knows how the disruption’s going to play out.
I agree with Adam’s observation that what’s happening right at this moment is a little bit of a race to the bottom, because all the companies are seeing other companies maybe lay people off or not hire, and they’re all like, OK, we’ve got to be able to make this work too—(laughs)—and we can surely get by with fewer workers given the new capabilities of AI. But nobody really knows. I think most of these companies are not making the decision because they know precisely what their capabilities with AI can be right now that will be able to replace workers. They just have a sense that they will need fewer workers, and they don’t want to be on the hook for workers that they hire now or don’t get rid of now. And so I think there will be a little bit of a shakeout in another few years, as companies start to really realize what they are able to gain functionally from AI and what they’re not. So I think we’re kind of at a bad point right now. But I don’t know; I could be wrong—it could get much worse in terms of these companies laying folks off.
I think—I don’t know if I agree with the full stick approach of punishing companies to do that, because there is this natural thing that just happens as technological change takes place; there is this kind of creative destruction, right, that we sometimes talk about, that kind of has to happen. But maybe we take a carrot approach with companies. I do believe that, given the just breakneck pace of changes in workforce needs driven by technological change, industry has to be part of the training solution. Our institutions of higher ed, our technical colleges, are just never going to be able to keep up with that change. And I think we need to shift more of the burden of training—I’m talking writ large—to our companies. I think for a long time they have gotten kind of a free ride, with the individual paying for college or training and the state subsidizing that. If you think about where value accrues from workforce training, it goes to the employer, it goes to the worker, and it goes to the state in terms of tax revenue. But it’s really been mostly the state and the person paying for the initial training. And so having the companies, the employers, more involved in training has to be part of the model going forward, and I think providing incentives to do that could be a really interesting policy play.
FASKIANOS: Thank you.
I’m going to go next to a written question from Bill Taupier, director of administration at the Massachusetts Department of Industrial Accidents: Do you have any concerns about AI being vulnerable to hacking and manipulation by governments or criminal organizations attempting to spread misinformation or deliberately changing data outcomes, particularly where it comes to financial markets?
BUSSE: I mean, yes.
SEGAL: Yes. (Laughter.)
BUSSE: You want to go, Adam?
FASKIANOS: Adam, you first. (Laughs.)
SEGAL: I think those are all legitimate concerns. AI as a cybersecurity risk is twofold. First is how AI is going to be adopted by bad cyber actors—we’ve seen both criminals and nation-states starting to use AI, mainly to help them with speed and scale, partly on the disinformation side. It, you know, helps them get colloquial English right and identify who they want to target. And so I think we’re definitely already seeing that.
And then, second, the systems themselves are going to be very vulnerable to cyberattacks and data poisoning. You know, Margaret mentioned trust, and so the question is, how do these companies assure us that the data is what they think it is? Because they often don’t know. And safety metrics are not, you know, nationally accepted—everybody measures safety and assuredness in different ways. And so being able to make that transparent, I think, is a real concern.
So, yeah, I think all of those things are definite concerns and threats. And that’s definitely a role that government and regulation should play: How do we think about safety metrics? How do we think about transparency? How do we think about liability?
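To make the data-poisoning concern above concrete, here is a minimal, hypothetical sketch in Python: a handful of mislabeled points injected into a training set visibly shifts a naive classifier's decision boundary. The midpoint classifier, the numbers, and the attack are all illustrative assumptions, not a description of any real system or attack.

```python
# Toy illustration (not any real system): data poisoning shifts a naive
# classifier's decision boundary. All numbers here are made up.

def fit_midpoint_threshold(samples):
    """Fit a 1-D classifier that predicts class 1 when x > threshold,
    where the threshold is the midpoint between the two class means."""
    xs0 = [x for x, label in samples if label == 0]
    xs1 = [x for x, label in samples if label == 1]
    mean0 = sum(xs0) / len(xs0)
    mean1 = sum(xs1) / len(xs1)
    return (mean0 + mean1) / 2

# Clean training data: class 0 clusters near 1.0, class 1 near 3.0.
clean = [(1.0, 0), (1.2, 0), (0.8, 0), (3.0, 1), (3.2, 1), (2.8, 1)]
clean_threshold = fit_midpoint_threshold(clean)  # = 2.0

# An attacker injects two extreme points mislabeled as class 0,
# dragging the class-0 mean (and therefore the boundary) upward.
poisoned = clean + [(9.0, 0), (10.0, 0)]
poisoned_threshold = fit_midpoint_threshold(poisoned)  # = 3.7

x = 2.5  # a point that should be classified as class 1
print(f"clean threshold:    {clean_threshold:.2f} -> class {int(x > clean_threshold)}")
print(f"poisoned threshold: {poisoned_threshold:.2f} -> class {int(x > poisoned_threshold)}")
# The same input flips from class 1 to class 0 after poisoning.
```

The point is only that corrupted training data silently changes model behavior, which is why the safety-metric and transparency questions raised above matter.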
BUSSE: Well, first of all, I lived in Massachusetts for twenty years and I never knew that there was, what was it, an Office (sic; Department) of Industrial Accidents I think is what it was? (Laughs.)
FASKIANOS: Yes.
BUSSE: I’m glad that we were covered in that way.
I do think there’s incredible risk. I mean, this is, I think, the biggest tradeoff of all with really being in the digital age: we now have vulnerabilities that we just never contemplated before. We have very different vulnerabilities than what we had a hundred years ago and through the rest of the history of mankind on Earth, right? And so I think this is a major area of concern, and we are going to have to beef up cybersecurity massively. And I don’t have a lot of expertise in it; I’m just looking at the landscape.
But I do think this brings up an interesting topic of just generally how we handle data. Data is the engine of AI. It is the currency of AI. And it is the currency of all these companies, right? And it’s our data. (Laughs.) And we have not passed any federal legislation on data ownership, data privacy, data control. And here we are—it’s literally currency, and we don’t have any laws that govern it at a federal level. And talk about a patchwork. Companies complain about the patchwork, and there is a patchwork of data privacy laws across the United States. But those same companies have actively lobbied against the federal efforts to do something about it, and have killed those bills in Congress. And so I think we generally need to get a better grip on data: how we get any remuneration for it, how it flows, where we’re vulnerable. I think generally we don’t have a lot of great data hygiene. Most companies don’t. And I think that’s something that we have to be investing in majorly.
FASKIANOS: Adam, anything to add, or shall we go on?
SEGAL: No, just that, you know, every CFR task force that’s ever dealt with cybersecurity, or data, or anything else like that, basically says we need a national privacy law, so.
FASKIANOS: Yes. OK.
We’re going next to Mackenzie Pope from the Alaska Senate.
Q: Good morning. (Laughs.)
FASKIANOS: Good morning.
Q: Greeting you from beautiful Dena’ina lands here in Anchorage, Alaska.
My question actually kind of dovetails off of that. I’m on the younger end—I’m a Zillennial. And, as a younger person, you know, our role as state legislatures and local governments is to provide good oversight, most of the time, you know? (Laughs.) So how can state legislatures and staff like myself respond? We’re seeing Congress behind the ball on data privacy. As Ms. Busse spoke to, we’re already behind the ball on regulating safety on existing social media. And I’m really worried about the lack of tech policy literacy at the state legislature level if we don’t have it in Congress. You know, it’s hard even for our smaller states. And I’m really concerned because, in the resources available to me, it feels like the people writing them are really parroting the press-release language from these AI companies and not doing their own independent analysis of the information being provided by the AI companies themselves.
And so I’m wondering how we, as oversight bodies, can find good critical analysis tools to make sure that we are on the ball when evaluating the AI proposals and the data center proposals that are going to come before each of our state legislatures, when it feels like there isn’t much independent analysis of these companies that isn’t originating from them—which is just not information I trust. And—I don’t want to ask a secondary question, but that goes to, you know, Sam Altman is one of the only people saying there isn’t an AI bubble, and he’s the last person I would trust on that topic. (Laughs.) So how do we get good information to make that economic analysis, that critical analysis, of these tools that our government is already kind of behind the ball on? Thank you.
BUSSE: So I’ll take a stab at that one. I think it’s a very difficult problem. And, by the way, I wish you could turn your camera on, because where you said you’re from sounds amazing. But what you’re really getting at is this real asymmetry of information, of knowledge. And that has massively increased with the introduction of LLMs, just given how they fundamentally work. You have very few people who really understand them, and now they’re being deployed at a population level.
And then as policymakers, we’re supposed to be able to deal with that. And we saw the negative effects of that asymmetry—which I don’t think was anywhere near as bad as it is now—with the social media crisis. Because policymakers just fundamentally didn’t understand what was going on. They didn’t understand the business models of surveillance capitalism—collecting your data, selling it, sharing it, serving you up targeted ads—which means that their whole business model is engagement. And, you know, you all know all of that. But the point is that that asymmetry has gotten even worse.
One of the things I decided to prioritize when I was hiring a director for our Office of AI Policy is that I was not going to get a policy person; I was going to get somebody who fundamentally understands how these systems work. So I hired a professor of applied mathematics from Brigham Young University who was doing research in AI, and who’s now running our office. And so we need to get more of those people into government. The problem is, it’s hard, because the companies can employ them for far more money. So there’s asymmetry there as well, right?
But I think there are enough people who really care about this moment and want to do the right thing—enough people out there who would want to work at a policy level and have the expertise we need. And I think we’re seeing this pop up across the country. The thing is, it’s just so nascent, and everyone’s just kind of had to react. You’re seeing these different policy centers pop up everywhere. And I think that’ll be a good thing, because I think they are going to be the organizations that can provide the resources you’re talking about, Mackenzie.
On the data center piece, we’re literally trying to develop our own tool in Utah so we can put in all those inputs—we care about airshed, we care about location, we care about ratepayers, all of those things—and see if we can come up with a formula for when it’s going to be acceptable to have data centers. But that requires analysis. And the problem is, states typically don’t have that level of sophistication. I don’t know about you all, but I know our state legislators are all part time. They have full-time jobs, and they don’t have a lot of resources in terms of staff. So I do think it’s a problem that you’ve identified. And I hope the American people will rise to this moment—people who care. And I know this sounds kind of, I don’t know, gauzy, but I do think people care about this moment. And the amount of uncertainty that everyone has about what the future looks like is motivating a lot of people to be involved.
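As a purely illustrative sketch of the kind of multi-factor siting formula Busse describes, here is a weighted score in Python. The factor names, weights, and acceptance threshold are hypothetical assumptions; Utah's actual tool, per the transcript, is still in development.

```python
# Hypothetical sketch of a multi-criteria datacenter-siting score.
# Factor names, weights, and the threshold are illustrative assumptions,
# not Utah's actual model (which, per the transcript, is still in draft).

from dataclasses import dataclass

@dataclass
class SiteProposal:
    # Each factor is normalized to 0.0 (worst) .. 1.0 (best) by prior analysis.
    airshed_impact: float        # 1.0 = negligible air-quality impact
    water_use: float             # 1.0 = closed-loop cooling / minimal draw
    ratepayer_protection: float  # 1.0 = developer fully funds new capacity
    local_tax_benefit: float     # 1.0 = large net revenue to the host community

# Illustrative weights; a real model would set these through analysis
# and stakeholder input. They sum to 1.0.
WEIGHTS = {
    "airshed_impact": 0.30,
    "water_use": 0.30,
    "ratepayer_protection": 0.25,
    "local_tax_benefit": 0.15,
}

ACCEPT_THRESHOLD = 0.70  # hypothetical cutoff for "acceptable"

def siting_score(p: SiteProposal) -> float:
    """Weighted sum of the normalized factors."""
    return sum(getattr(p, name) * w for name, w in WEIGHTS.items())

proposal = SiteProposal(
    airshed_impact=0.8, water_use=0.4,
    ratepayer_protection=0.9, local_tax_benefit=0.7,
)
score = siting_score(proposal)
print(f"score={score:.2f}, acceptable={score >= ACCEPT_THRESHOLD}")
```

A weighted sum is the simplest form such a tool could take; a real model might add hard constraints, for example rejecting any site below a water-use floor regardless of its overall score.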
SEGAL: So Meredith Whittaker, the CEO of Signal, actually gave an interview in Politico—I think it was last week, or the week before—where she essentially said, you know, policymakers at all levels should just ask what they think are stupid questions, right? There shouldn’t be the feeling of, oh, you don’t understand the tech—or being bullied into believing you don’t have the sophistication to do it—because from your gut instinct you are probably asking the right kind of question about your constituents’ needs. So I think that is one answer. I think, you know, you definitely want the expertise for the type of policymaking that you’re talking about.
I think also, quite honestly, as a younger person, you have a role to play in embarrassing the older people. When I was in the State Department I worked for Ambassador Fick. Ambassador Fick had been an entrepreneur; he had started his own cybersecurity company. And his argument in the department—especially when we would meet with people who had been in the State Department thirty or forty years and would say, well, I don’t understand technology—was: you can’t throw your hands up and say, I don’t understand China, right? That’s not a legitimate response in this day and age. And so I think you could probably make a similar kind of argument with some of your colleagues—that they have to do their own education. It’s not up to you to bear that.
I only know of resources, I think, mostly at the national level. So, you know, TechCongress, which helps place young scientists and young technologists into offices. And Aspen Digital has a similar program where they take technologists and place them in local government offices. You might be able to convince Craig Newmark to do something; Craig Newmark has been spending a lot of money on cybersecurity at the local level and getting more people involved. So those might be some places to look at. And maybe you can get your local university to do a boot camp—you know, I know Stanford runs a number of boot camps for legislative assistants at the congressional level. Those all might be ways to try to address it at the local level.
BUSSE: And, by the way, there’s so much jargon that people like to throw out, and it can be very intimidating. I think it actually becomes kind of exclusionary, and it’s done on purpose. But we can all understand this at some level, enough to be able to make good policy.
FASKIANOS: Wonderful.
I’m going to take the next question from Maine State Representative Lori Gramlich, who is the assistant house majority leader: Curious about enforcement for AI and therapy. How did you enforce this? And what are you doing with companion bots and minors? I’m working on draft legislation in Maine to address companion bots and minors accessing AI companion bots for therapy. I think, Margaret, this is for you.
BUSSE: Yeah. So what we did when we had just stood up our Office of AI Policy—we have a short legislative session that runs from about mid-January to the beginning of March, and so we wanted to get something going quickly. And we wanted to hit an issue that was impactful but that we had the capacity to do quickly. And we landed on mental health therapy bots, because they were popping up. There is a mental health crisis; there’s a need for extra capacity. But at the same time, you could see that there would be massive potential for harm—harms that we were seeing. You know, I was reading articles about therapy bots that were recommending suicide—and these were actual therapy bots, not the companion bots we’ve heard about since then—just not understanding, you know, hallucinating, those kinds of things.
And of course, the other danger that we recognized is that the whole data collection and surveillance model typically deployed by these kinds of companies was a very bad idea in the case of mental health therapy bots, because now the information you’re putting in about yourself, that you’re sharing, is the most personal, the most sensitive data that could ever be thought of, right? But we also realized that there was a regulatory barrier, because we as a state—in fact, our department—say you have to have a license to practice mental health therapy in our state. And that means we could go after a mental health therapy bot for not having a license, because we don’t issue licenses to bots.
And so what we did is we first attacked the privacy issue. We said you can’t share or sell the data that you’re collecting on people—period, full stop. It’s private. It’s sensitive. All those things. And then we said, we will give you a safe harbor—and I think all of you who are legislators know what that is. We will basically not go after you for unlicensed practice as a mental health therapist if you follow ten best practices for putting in proper guardrails, which our office developed through a ton of research and outreach to experts and stakeholders. So it doesn’t hallucinate, it’s not giving bad recommendations, there’s clarity about what it’s been trained on—those kinds of things.
And so that was a pathway to innovate in this space while also putting some guardrails in place, and I think that gave that kind of certainty to innovators in the space. And we were informed in that, by the way, because we had a mental health therapy bot company come to us wanting regulatory mitigation or relief; that helped inform the legislation. On AI companions, we’re working on legislation right now. For mental health therapy bots, the law was very narrowly targeted to those bots that said they provided therapeutic services, whereas AI companion bots are much broader. And even just the definition—I’m sure you’re already thinking about this as a legislator—is a little tricky.
So we’re working on that. We don’t have it finalized yet, but we’re happy to be in contact with you if you want to share ideas. We’re working on concepts like monitoring and reporting, making a different, more protective experience for kids, and also some guardrails around collecting data—that privacy piece. I think those are the main things we’re looking at. But we’d be happy to engage with you. You’re welcome to reach out. My email is [email protected]. And I can connect you with our Office of AI Policy, because we’d love to hear what other states are doing.
FASKIANOS: Wonderful. Thank you.
I’m going to take the next question from Connecticut Representative Nick Menapace. If you can correct my pronunciation, please.
Q: Thank you. Yeah, it’s Nick Menapace. I hope you can hear me OK.
So I just had a question to build off what was previously said. I was curious, especially with Utah, where we’re talking about the need for a lot of these data centers: there are areas in need of water, and we’re also talking about the need for energy. And I was curious to see what some of these other states are doing about this. Because, you know, we have a lot of these areas with a lot of space, away from other things, where some people would love to put data centers—but they’re not near much in the way of water. And I wanted to see what the ideas are for this.
At the same time, I know we’ve also had issues about regulation when it comes to green energy and nuclear energy, and I wasn’t sure what impact that’s going to have. We’re talking a lot about different possible uses of nuclear energy for AI. I know there is an attempt to build a data center near me, directly next to a nuclear power plant, which seems ideal. But there are also people who are very concerned about how that will affect their energy rates, especially because we have a large wind farm project that the federal government is attempting to shut down. And I want to know what other states are doing about this—what you guys are doing about this.
BUSSE: Yeah, I don’t feel like I have really good, specific answers, because I think we haven’t really decided. One of the things that we did do, though, is that we put into legislation this past year a way for large-load customers to be able to go off of the regulated utility. They have to give the regulated utility sort of a first-right-of-refusal-ish type thing—they have to ask it first whether it can handle the load. But then you can allow off-grid, or just other, producers of energy to come in and service that data center if the utility can’t take on that kind of large load. And it potentially can still hook into the grid, or not, depending on what they want to do. So we passed some enabling legislation, if you will, that I think will hopefully be helpful to drive new energy there.
But in terms of where they’re sited, this is exactly what we’re working on right now. In fact, in my inbox I have the first cut of an analysis of this, which I haven’t read yet because it just came over yesterday. But I think, again, it has to take into account all these different factors: airshed, water use, the amount of tax revenue it could bring into certain communities. Certain communities may benefit, but then the whole state may not—if it has to pay more, you know, in electrical rates, or if it’s causing further water shortages. We live in a desert, so we have to really be careful about that. Connecticut, not so much. Or if it’s causing us to compromise our airshed.
So those are all things that have to be put into a model, which is what we’re working on right now, so we have a better way to analyze and give answers when data centers come to us. Which they are. And we’re working with our utility to try to enable it to build up resources as well—but, again, in a way that is not going to be borne on the backs of utility ratepayers. That’s what’s going to cause a lot of backlash, and it sounds like it may already be doing that there in Connecticut.
FASKIANOS: Adam, do you want to weigh in or should I go to the next question?
All right, I’m going to go to the next question. Which is from Brian Peete. And you need to unmute yourself. There you go. And identify yourself.
Q: Hi. My name is Brian Peete. I’m the director of the Riley County Police Department in Manhattan, Kansas. I’m also a board member of CIT, Crisis Intervention Team International, with law enforcement and crisis response.
And my question’s pretty much for Margaret: Where do you see AI—or have you thought about AI—in the mental health aspect of crisis response and law enforcement? And are there any opportunities for me to reach out to you, to talk about some of the things that we’re doing in this space, including preventing active shooter violence?
BUSSE: Yeah. We’d love to learn from you. That would be fantastic. One of the things I mentioned earlier is that the ideas we have for our AI companion bill will include requirements to monitor and report where the AI company, the LLM, may detect some sort of suicidal conversation, and that could very much be applied in the case of an active shooter as well. Which would be fascinating—you know, somebody who’s talking to the AI chatbot about bringing guns to a situation, whatever. That could be something interesting to have as a required flag for the company that’s providing the AI companion services. So, yeah, happy to talk more about that and learn from you.
FASKIANOS: (Off mic.)
BUSSE: I think you’re muted, Irina.
FASKIANOS: Oh, there I go.
I’m going to try to sneak in one last question from Michigan Commissioner Gail Patterson-Gladney: We’ve heard a lot about the AI bubble. What is it, and how will it affect local government?
BUSSE: Want to go ahead, Adam? (Laughter.)
FASKIANOS: Adam, you go first.
SEGAL: I mean, I think there are several ways to define the bubble. One is the question of whether this specific type of AI we’ve been focusing on—large language models and generative AI—is going to reach a plateau, in which case it’ll be an important tool, but not the gamechanger—you know, 500 Ph.D.s in a room inventing new things all the time. So there’s that question.
And then the other is just the massive—you know, trillions of dollars of—investment in infrastructure, data centers, energy, and then, where’s the revenue all going to come from? Right now the companies have started making some revenue, but what’s the model there? And are we going to see what we saw in 1999, where we built out all this infrastructure for the internet and there was not as much demand? It took another ten years to find the demand for all of that. So I think those are the two bubbles that people are mainly focused on. And if there’s a bubble, there will be, I think, significant economic pain for a number of years, and then people will pick up the pieces. I think this is a technology that is going to be transformative; it’s just a question about the scale.
FASKIANOS: Margaret.
BUSSE: Yeah. And I would just say, it’s very much to be determined. But I think there was a bubble—many of us remember that in 2000, with the internet. But then it did actually bear out tremendous value over time, even if it was acutely hyper-valued or overvalued at that moment in time. And so I think we’ll have to see where there could be those sort of mini bubbles, but overall, in the long term, I think we’re going to see tremendous value from this technology. We just have to get it deployed right. And I feel optimistic about that, really, because of all the interest by policymakers like yourselves. It’s just so important; we need to be involved in this. And Adam said something earlier about how you can’t just say, I don’t understand technology. You have to engage with it. You have to develop some level of knowledge so that you can. And I think that’s what all of you are doing—certainly what I’ve had to do. I’m not a tech expert at all.
So I think there are some interesting opportunities here. And on the energy piece, one of the things we’re going to have to watch carefully is—you know, with DeepSeek, one of the things that I think freaked everyone out was that it used a lot less energy. And it made people rethink: do we actually need to build as much energy as we’re saying? We don’t want to overbuild, especially because there’s an opportunity cost to that. And actually, there’s an opportunity cost to the investment we’re making; it’s sort of crowding out investments in everything else, because everything is going into AI-related companies, and so we may be missing other opportunities as well. But that’s the market, and we’re going to have to see how that bears out.
FASKIANOS: Wonderful. This has been a great hour. Thank you very much to both of you for joining us, to all of you for your questions, and for the work you’re doing in your communities. I’m sorry that we could not get to all of you, but we will have to revisit this topic. So, again, Adam Segal, Margaret Busse, thank you. We will send a link to the webinar recording and transcript, and contact information. And, as always, we encourage you to visit CFR.org, ForeignAffairs.com, and ThinkGlobalHealth.org for the latest analysis on international trends and how they’re affecting the United States. And you can also email us with suggestions for future webinar topics at [email protected]. So thank you both, and thank you all. And enjoy the rest of your day.
(END)